Openshift 3.6 on Openstack – developer cluster setup

The ECRP Project uses Kubernetes/Openshift as the base for its Cloud-Robotics PaaS.  Apart from running robotic applications distributed across robots and clouds, we wanted to assess whether latency to the closest public data-center (Frankfurt for both AWS and GKE) would be low enough to run common SLAM and navigation apps. The short answer is YES, although our work there continues.

Thanks to the work of Seán, Bruno, and Remo, the ICCLab has a brand new Openstack cluster. The Cloud-Robotics crew decided to take it for a spin, and to use some of our research grant money on public clouds for other activities as well (e.g., FaaS / Serverless computing).

Openshift has two projects on Github (https://github.com/openshift/openshift-ansible and https://github.com/redhat-openstack/openshift-on-openstack) that should, ideally, help in setting up the whole thing, but, as has consistently been our experience since version 1.3, the Ansible scripts that should set up an Openshift Origin cluster always have some (I’m sure totally undesired) hiccups preventing them from doing their job properly. While we figure out what exactly the problem is, we still wanted to set up a quick Openshift instance that could be accessed remotely so we could get on with our work. Thanks to our lucky star, Openshift 3.6 comes with a handy ‘oc cluster up’ command that sets up a single-machine installation. The command ‘oc cluster join’ should even allow adding machines to the cluster, albeit (to our utmost surprise) the documentation is a bit unclear on how exactly to do it. Anyhow, in this little guide we’ll help you set up (and configure!) a single-machine Openshift 3.6 instance on Openstack. Here we go.

  • First things first: log in to your Openstack console and make sure you have a fresh Centos 7.3 image you can spawn. If not, get your hands on one and upload it to Glance.
  • Create a VM from the image (I used our “medium” flavor), set proper keys for accessing it, open port 22 and SSH to your machine.
    Note: you’ll also need to set up proper security group rules to access the Openshift Console / API and router later on (TCP: 8443, 443, 80). A good idea is to do it while you open port 22.
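    If you manage security groups from the command line, the rules could look something like this (a sketch assuming the openstack CLI and a security group named my-sec-group; adapt the names to your setup):

$ openstack security group rule create --protocol tcp --dst-port 22:22 my-sec-group
$ openstack security group rule create --protocol tcp --dst-port 80:80 my-sec-group
$ openstack security group rule create --protocol tcp --dst-port 443:443 my-sec-group
$ openstack security group rule create --protocol tcp --dst-port 8443:8443 my-sec-group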
  • Download the Openshift 3.6 client tools binaries: https://github.com/openshift/origin/releases
  • Install docker
$ curl -fsSL get.docker.com -o get-docker.sh
$ sh get-docker.sh
  • Set the insecure docker registry range (this is the service network where Openshift’s integrated registry will live). Edit your /etc/docker/daemon.json so that it looks like this:
{
  "insecure-registries": ["172.30.0.0/16"]
}
  • Start the docker service with: sudo systemctl start docker
  • Add the centos user to the docker group
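For example:

$ sudo usermod -aG docker centos

(log out and back in so the new group membership takes effect)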
  • Set up DNS for your VM (e.g., myos.myorg.com). Since application routes will be generated under the routing suffix, a wildcard record (e.g., *.myos.myorg.com) pointing at the same address is a good idea too.
  • Start the “cluster” with a public hostname and a routing-suffix (this will be used by the routes generated on your cluster)
oc cluster up --public-hostname=myos.myorg.com --routing-suffix=myos.myorg.com --service-catalog --host-data-dir=/home/centos/os_data_persisted --use-existing-config

We are also enabling the “Service catalog” feature and making sure the cluster configuration is persisted under /home/centos/os_data_persisted.
This step is necessary so that we can later change the cluster configuration and restart it.

This is the output you should get:

Starting OpenShift using openshift/origin:v3.6.0 ...
 OpenShift server started.

The server is accessible via web console at:
 https://127.0.0.1:8443

You are logged in as:
 User: developer
 Password: <any value>

To login as administrator:
 oc login -u system:admin

In order to enable access to the Template Service Broker for use with the Service Catalog, you must first grant unauthenticated access to the template service broker api.

WARNING: Enabling this access allows anyone who can see your cluster api server to provision templates within your cluster, impersonating any user in the cluster (including administrators). This can be used to gain full administrative access to your cluster. Do not allow this access unless you fully understand the implications. To enable unauthenticated access to the template service broker api, run the following command as cluster admin:

oc adm policy add-cluster-role-to-group system:openshift:templateservicebroker-client system:unauthenticated system:authenticated

WARNING: Running the above command allows unauthenticated users to access and potentially exploit your cluster.

You might want to set a proper password:

oadm config set-credentials test --username=developer --password=a_proper_password

If you run “docker ps” you’ll find that your VM is running a nice little set of containers: one for the Openshift API server itself (origin), and others, for example, for the service catalog.

Configuring for Openstack

Now it’s time to configure Openshift so that it knows it’s running on Openstack and can take advantage of nice little things such as volumes on Cinder. Of course it would be good to pass this configuration at startup, but it’s not among the “oc cluster up” arguments. Luckily, we know a thing or two about Docker, and Openshift does a good job of preserving its configuration when shutting down, so we’ll just resort to a small trick.

This is what we should be doing: https://docs.openshift.org/latest/install_config/configuring_openstack.html

However, you can’t really log into the “origin” container, change its config and restart it, because the other containers in the group don’t seem to like that at all. So we’ll just use “docker cp” to do our thing.

Copy the original master and node config files from the container to the VM.
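Note that “docker cp” won’t create the local target directories for you, so create them first:

$ mkdir -p openshift.local.config/master openshift.local.config/node-localhost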

$ docker cp origin:/var/lib/origin/openshift.local.config/master/master-config.yaml openshift.local.config/master/master-config.yaml 
$ docker cp origin:/var/lib/origin/openshift.local.config/node-localhost/node-config.yaml openshift.local.config/node-localhost/node-config.yaml

Edit them, adding the Openstack cloud provider and the location of the credentials file (e.g., /var/lib/origin/openshift.local.config/master/cloud.conf).

Master:

kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "openstack"
    cloud-config:
      - "/var/lib/origin/openshift.local.config/master/cloud.conf"
  controllerArguments:
    cloud-provider:
      - "openstack"
    cloud-config:
      - "/var/lib/origin/openshift.local.config/master/cloud.conf"

Node (nodeName must match the instance name in Openstack):

nodeName:
  <instance_name> 

kubeletArguments:
  cloud-provider:
    - "openstack"
  cloud-config:
    - "/var/lib/origin/openshift.local.config/master/cloud.conf"

Create a file with your credentials to access Openstack (you can find all the required info in the Openstack GUI, under the API Access section), e.g., openshift.local.config/master/cloud.conf:

[Global]
auth-url = <OS_AUTH_URL>
username = <OS_USERNAME>
password = <password>
domain-id = <OS_USER_DOMAIN_ID>
tenant-id = <OS_TENANT_ID>
region = <OS_REGION_NAME>

Copy the edited files to the running container: 

$ docker cp openshift.local.config/master/cloud.conf origin:/var/lib/origin/openshift.local.config/master/cloud.conf
$ docker cp openshift.local.config/master/master-config.yaml origin:/var/lib/origin/openshift.local.config/master/master-config.yaml
$ docker cp openshift.local.config/node-localhost/node-config.yaml origin:/var/lib/origin/openshift.local.config/node-localhost/node-config.yaml

Shut down the cluster (this will persist your configuration under /home/centos/os_data_persisted) and restart it:

$ oc cluster down
$ oc cluster up --public-hostname=myos.myorg.com --routing-suffix=myos.myorg.com --service-catalog --host-data-dir=/home/centos/os_data_persisted --use-existing-config
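To quickly check that the cluster came back up and the (single) node registered correctly, you can run:

$ oc login -u system:admin
$ oc get nodes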

Almost there. Now we need to tell Openshift we want to provide a storage class using Cinder through the Openstack API. This will allow us to dynamically provision volumes for our containers in Openshift.

Create the file storage-class.yaml so that it looks like this:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: default
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/cinder

Become cluster admin and create the storage class:

$ oc login -u system:admin 
$ oc create -f storage-class.yaml
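You can verify that the class was registered (and marked as default) with:

$ oc get storageclass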

That should be all. You can try it out by creating a PersistentVolumeClaim (PVC) and a Pod/Deployment using it. The volume should be created on Cinder when the PVC is created, and mounted on the VM (and in the container using it) when the pod is instantiated.
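As a minimal sketch, a claim like the one we describe below can be created straight from the shell (the Pod/Deployment mounting it is left to you); since we marked our storage class as the default, the claim doesn’t need to reference it explicitly:

$ cat <<EOF | oc create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: salt-key-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF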

For example, looking at one of our PVCs:

$ oc describe pvc salt-key-volume-claim

Output:

Name: salt-key-volume-claim
Namespace: myproject
StorageClass: default
Status: Bound
Volume: pvc-191ffcda-8363-11e7-97d7-fa163e3b9d8a
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
 pv.kubernetes.io/bound-by-controller=yes
 volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/cinder
Capacity: 1Gi
Access Modes: RWO
Events:
  FirstSeen  LastSeen  Count  From                         SubObjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                         -------------  ----    ------                 -------
  1m         1m        1      persistentvolume-controller                 Normal  ProvisioningSucceeded  Successfully provisioned volume pvc-191ffcda-8363-11e7-97d7-fa163e3b9d8a using kubernetes.io/cinder

If you have other needs (e.g., setting up your own customized routing) you can then “do as we do”: https://blog.zhaw.ch/icclab/openshift-custom-router-with-tcpsni-support/

Otherwise, that’s all folks.

